| Name | Version | Summary | date |
| pyrasp |
0.9.1 |
Python RASP |
2025-10-25 04:56:09 |
| haliosai |
1.0.5 |
Advanced Guardrails and Evaluation SDK for AI Agents |
2025-10-23 18:05:33 |
| semfire |
0.3.0 |
SemFire (Semantic Firewall): detect advanced AI deception, including in-context scheming and multi-turn manipulative attacks. |
2025-10-21 12:31:58 |
| pan-mcp-relay |
0.0.5b1 |
Palo Alto Networks AI Security MCP Relay |
2025-08-29 03:42:21 |
| llama-index-packs-zenguard |
0.4.0 |
llama-index packs zenguard integration |
2025-07-30 20:52:17 |
| sonnylabs |
0.1.2 |
Python client for the SonnyLabs AI Security Scanner - Test your AI applications for prompt injection vulnerabilities |
2025-07-26 15:05:12 |
| mseep-agentic-security |
0.7.4 |
Agentic LLM vulnerability scanner |
2025-07-17 09:44:34 |
| llm-agent-protector |
0.1.0 |
Polymorphic Prompt Assembler to protect LLM agents from prompt injection and prompt leak |
2025-07-10 23:16:57 |
| agentic_security |
0.4.5 |
Agentic LLM vulnerability scanner |
2025-02-15 11:36:15 |
| agentdojo |
0.1.26 |
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents |
2025-02-12 08:29:46 |
| llama-index-packs-llama-guard-moderator |
0.3.0 |
llama-index packs llama_guard_moderator integration |
2024-11-17 22:42:41 |
| prompt-protect |
0.1 |
An NLP classification for detecting prompt injection |
2024-09-02 22:57:55 |
| llm-guard |
0.3.15 |
LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure. |
2024-08-22 19:39:48 |
| langalf |
0.0.4 |
Agentic LLM vulnerability scanner |
2024-04-15 12:40:16 |
| last_layer |
0.1.32 |
Ultra-fast, Low Latency LLM security solution |
2024-04-05 12:38:46 |